
Conversation

@christopherholland-workday

Overview

Part of advisory https://github.com/FlowiseAI/Flowise/security/advisories/GHSA-2x8m-83vc-6wv4

This PR fixes multiple SSRF bypass issues in the HTTP security wrappers by eliminating DNS rebinding (TOCTOU) vulnerabilities and enforcing deny-list validation at request time.

The changes ensure that every outbound HTTP request uses a DNS-pinned connection that matches the validated IP, including across redirects, and that insecure default behavior is removed.

Solution

1. Default insecure configuration

Previously, if HTTP_DENY_LIST was unset, requests were allowed without restriction, including access to localhost and private IP ranges.

Fix:
Requests now fail fast when HTTP_DENY_LIST is not defined, ensuring SSRF protections are always enforced.
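
A minimal sketch of the fail-fast check, assuming the resolveAndValidate helper referenced later in this thread (the error message, the ResolvedTarget shape, and the elided deny-list comparison are illustrative, not the merged code):

import * as dns from 'node:dns/promises'

interface ResolvedTarget {
    protocol: string
    hostname: string
    ip: string
    family: number
}

async function resolveAndValidate(url: string): Promise<ResolvedTarget> {
    const denyListString = process.env.HTTP_DENY_LIST
    if (!denyListString) {
        // Previously a missing deny list meant no restrictions at all.
        throw new Error('HTTP_DENY_LIST is not configured; refusing outbound request')
    }
    const { protocol, hostname } = new URL(url)
    const { address, family } = await dns.lookup(hostname)
    // Deny-list comparison of `address` elided here; it throws on a match.
    return { protocol, hostname, ip: address, family }
}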

2. DNS rebinding (TOCTOU) vulnerability

The previous implementation validated hostnames using dns.lookup, but the HTTP client performed a second DNS resolution when opening the socket. An attacker could therefore return a safe IP during validation and switch to a blocked IP for the actual connection. Both the secureFetch and secureAxiosRequest methods were affected.

Callers of these methods, such as utils#crawl, are protected by strengthening the methods themselves.

Fix:
DNS resolution and validation now occur immediately before each request, and the resolved IP is pinned into the socket using a custom http.Agent / https.Agent. This guarantees the request connects to the same IP that was validated.
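
One standard way to pin, sketched here under the createPinnedAgent name used in the snippets quoted later, is to override the agent's lookup so every socket it opens resolves to the already-validated IP. The merged implementation may differ in detail; note that @types/node does not declare lookup on AgentOptions (hence the cast), though Node honors it at runtime:

import * as http from 'node:http'
import * as https from 'node:https'

function createPinnedAgent(resolved: ResolvedTarget): http.Agent {
    // Always answer DNS with the IP that was just validated, so a second
    // (malicious) DNS response can never change where the socket connects.
    const lookup = (_hostname: string, options: any, callback: any) => {
        if (options && options.all) {
            callback(null, [{ address: resolved.ip, family: resolved.family }])
        } else {
            callback(null, resolved.ip, resolved.family)
        }
    }
    // Pinning via lookup (rather than rewriting the URL to a raw IP) keeps
    // TLS SNI and certificate hostname verification on the original hostname.
    const agentOptions = { lookup, keepAlive: false } as http.AgentOptions
    return resolved.protocol === 'https:' ? new https.Agent(agentOptions) : new http.Agent(agentOptions)
}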

3. Redirect safety

Redirects were previously followed automatically by the HTTP client, allowing bypass of validation on redirected URLs.

Fix:
Automatic redirects are disabled. Each redirect target is:

  • explicitly resolved,
  • validated against the deny list,
  • and connected using a newly pinned agent.

Redirect method rewriting (e.g. POST → GET on 303) follows HTTP specifications.
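
Put together, the redirect loop looks roughly like this (node-fetch v2 style, reusing the resolveAndValidate and createPinnedAgent sketches above; MAX_REDIRECTS is an assumed cap, and only the common 301/302/303 rewrites are shown):

import fetch, { RequestInit, Response } from 'node-fetch'

const MAX_REDIRECTS = 5

async function secureFetch(url: string, init: RequestInit = {}): Promise<Response> {
    let currentUrl = url
    let currentInit: RequestInit = { ...init, redirect: 'manual' } // never auto-follow

    for (let hop = 0; hop <= MAX_REDIRECTS; hop++) {
        const resolved = await resolveAndValidate(currentUrl) // re-validate every hop
        const agent = createPinnedAgent(resolved)
        const response = await fetch(currentUrl, { ...currentInit, agent: () => agent })

        if (response.status < 300 || response.status >= 400) return response

        const location = response.headers.get('location')
        if (!location) return response
        currentUrl = new URL(location, currentUrl).toString() // resolve relative targets

        // RFC 9110: 303 always rewrites to GET; clients historically also
        // rewrite POST to GET on 301/302.
        const rewriteToGet =
            response.status === 303 ||
            ((response.status === 301 || response.status === 302) && currentInit.method === 'POST')
        if (rewriteToGet) {
            currentInit = { ...currentInit, method: 'GET', body: undefined }
        }
    }
    throw new Error('Too many redirects')
}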

4. node-fetch agent handling

When using node-fetch v2, passing an Agent instance directly may not guarantee it is used, potentially allowing DNS re-resolution.

Fix:
The agent is now passed as a function (agent: () => agent), which node-fetch v2 reliably invokes, ensuring the pinned agent is always used.
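
At each call site the change is small; per the description above, the function form is the one node-fetch v2 reliably honors (names reused from the loop sketch):

// Fragile: the bare instance may be ignored on some paths, letting the
// client fall back to default DNS resolution.
await fetch(currentUrl, { ...currentInit, agent })

// Reliable: node-fetch v2 invokes the function per request and uses the
// returned pinned agent.
await fetch(currentUrl, { ...currentInit, agent: () => agent })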

Ensures that the validated host/IP is the same one used when the connection is later opened. Refactors secureFetch and secureAxiosRequest to use resolver helper methods.
@gemini-code-assist

Summary of Changes

Hello @christopherholland-workday, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses critical Server-Side Request Forgery (SSRF) bypass vulnerabilities by implementing robust security measures for outbound HTTP requests. It focuses on eliminating DNS rebinding (TOCTOU) issues, enforcing deny-list validation at the point of connection, and securing redirect handling. The changes ensure that all requests utilize DNS-pinned connections that match validated IP addresses, thereby removing insecure default behaviors and significantly strengthening the application's defense against network-based attacks.

Highlights

  • Enforced Deny List Configuration: Requests now explicitly fail if the "HTTP_DENY_LIST" environment variable is not defined, ensuring that Server-Side Request Forgery (SSRF) protections are always active and preventing unrestricted access to local and private networks.
  • DNS Rebinding (TOCTOU) Prevention: The system now performs DNS resolution and validation immediately before each HTTP request, pinning the resolved IP address to the socket using custom "http.Agent" or "https.Agent" instances. This eliminates the time-of-check-to-time-of-use (TOCTOU) vulnerability where an attacker could switch IPs between validation and connection.
  • Enhanced Redirect Handling: Automatic redirects are disabled for both "axios" and "node-fetch" requests. Each redirect target is now manually resolved, validated against the deny list, and connected using a newly pinned agent, preventing bypasses through redirect chains. HTTP specification-compliant method rewriting (e.g., POST to GET on 303) is also implemented.
  • Reliable node-fetch Agent Usage: For "node-fetch" v2, the custom agent is now passed as a function ("agent: () => agent") to ensure it is reliably invoked and used, guaranteeing that the pinned agent is always applied and preventing unintended DNS re-resolution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request effectively addresses a critical DNS Rebinding (TOCTOU) vulnerability in secureAxiosRequest and secureFetch by implementing DNS pinning. The changes ensure that outbound requests connect to the same IP that was validated, even across redirects. I also appreciate the move to a secure-by-default configuration that requires HTTP_DENY_LIST to be set. The code is much cleaner and more secure. I have a few minor suggestions to improve code consistency and clarity.

const resolved = await resolveAndValidate(currentUrl)
const agent = createPinnedAgent(resolved)

const response = await fetch(currentUrl, { ...currentInit, agent: () => agent })

Check failure

Code scanning / CodeQL

Server-side request forgery (Critical): The URL of this request depends on a user-provided value.

Copilot Autofix

In general, to fix this kind of SSRF issue you ensure that user-controlled input cannot directly choose an arbitrary request target. Instead, you validate or transform the input into a safe value: enforce allowed schemes (typically http/https only), disallow raw IPs if appropriate, and restrict hostnames to an allow-list or at least block link-local/metadata endpoints and private networks regardless of environment configuration. You should also ensure these checks are applied consistently wherever user-controlled URLs are used.

For this codebase, the best minimal fix without changing existing functionality is to strengthen checkDenyList in packages/components/src/httpSecurity.ts so it always enforces baseline scheme and host/IP restrictions, even if HTTP_DENY_LIST is not set. All user-controlled URLs in this flow go through checkDenyList already (fetch-links service calls it explicitly; xmlScrape calls secureFetch, which uses resolveAndValidate internally), so augmenting checkDenyList to also reject non-HTTP(S) schemes and clearly dangerous hostnames/IPs will ensure secureFetch cannot be used for SSRF against internal/metadata endpoints. Concretely:

  • Parse the URL and immediately reject:
    • Non-http/https schemes.
    • Hostnames like localhost, 127.0.0.1, ::1, and well-known cloud metadata hostnames (169.254.169.254, metadata.google.internal, etc.).
  • Maintain the existing deny-list behavior, but apply it in addition to the new static rules.
  • To avoid code duplication and future errors, introduce a small helper (e.g. isDeniedHostname) at the top of checkDenyList to encapsulate these checks before doing DNS resolution.

This keeps the public API the same (checkDenyList signature unchanged) and requires no changes to the callers. All edits are confined to the checkDenyList function in packages/components/src/httpSecurity.ts; no new imports are needed because we reuse URL and existing ipaddr/dns imports.


Suggested changeset 1
packages/components/src/httpSecurity.ts

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/packages/components/src/httpSecurity.ts b/packages/components/src/httpSecurity.ts
--- a/packages/components/src/httpSecurity.ts
+++ b/packages/components/src/httpSecurity.ts
@@ -34,16 +34,45 @@
 
 /**
  * Checks if a URL is allowed based on HTTP_DENY_LIST environment variable
+ * and built-in SSRF protections (scheme and hostname/IP restrictions).
  * @param url - URL to check
- * @throws Error if URL hostname resolves to a denied IP
+ * @throws Error if URL hostname or resolved IP is denied
  */
 export async function checkDenyList(url: string): Promise<void> {
+    const urlObj = new URL(url)
+    const hostname = urlObj.hostname
+    const protocol = urlObj.protocol.toLowerCase()
+
+    // Enforce allowed protocols
+    if (protocol !== 'http:' && protocol !== 'https:') {
+        throw new Error(`Access to this URL is denied by policy: unsupported protocol "${protocol}"`)
+    }
+
+    // Block obvious local/metadata hosts regardless of HTTP_DENY_LIST
+    const loweredHost = hostname.toLowerCase()
+    const blockedHostnames = new Set<string>([
+        'localhost',
+        '127.0.0.1',
+        '::1',
+        '169.254.169.254',          // AWS/GCP metadata IP
+        'metadata.google.internal', // GCP metadata hostname
+        '169.254.169.250',
+        '169.254.170.2'             // AWS ECS task metadata
+    ])
+    if (blockedHostnames.has(loweredHost)) {
+        throw new Error('Access to this host is denied by policy.')
+    }
+
     const httpDenyListString: string | undefined = process.env.HTTP_DENY_LIST
-    if (!httpDenyListString) return
+    if (!httpDenyListString) {
+        // Even if no explicit deny list is configured, still prevent direct IP access
+        if (ipaddr.isValid(hostname)) {
+            throw new Error('Access to this host is denied by policy.')
+        }
+        return
+    }
 
     const httpDenyList = httpDenyListString.split(',').map((ip) => ip.trim())
-    const urlObj = new URL(url)
-    const hostname = urlObj.hostname
 
     if (ipaddr.isValid(hostname)) {
         isDeniedIP(hostname, httpDenyList)
EOF

This is by design and is an artifact of a previous commit: https://github.com/FlowiseAI/Flowise/blob/main/packages/components/src/httpSecurity.ts#L178

Will check with the team whether other actions are required to fix this.

}

async function resolveAndValidate(url: string): Promise<ResolvedTarget> {
    const denyListString = process.env.HTTP_DENY_LIST

Forcing every user to specify HTTP_DENY_LIST is not user/beginner friendly.

The decision we took is to give users the option to specify it if needed; otherwise nothing is blocked by default. It is not secure by default. We have also documented this here.


Got it, I will revert this
